Fast followers?/Speed contagion: assessing the impact of the Montreal F1 Grand Prix on high-speed ticketing rates (2000–2022)

Step 1. Weather variables. We defined the dates by year and availability (whether the event took place), day of the week, time window, and pre- and post-event spans, and then linked them to nearby meteorological stations. We also linked them to the collisions data.
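The windowing logic can be illustrated compactly; below is a minimal sketch (in Python, outside the R pipeline) that labels each day relative to a hypothetical race date, using illustrative 35-day pre-event and 7-day post-event spans (not necessarily the study's exact choices):

```python
from datetime import date

def label_relative_day(d, race_date, pre_days=35, post_days=7):
    """Classify a calendar day relative to the race date.

    Returns (relative_day, window), with negative relative days before
    the race, or None when the day falls outside the study window.
    The 35/7-day spans are illustrative, not the study's exact choice.
    """
    rel = (d - race_date).days
    if rel < -pre_days or rel > post_days:
        return None
    window = "pre" if rel < 0 else ("race_day" if rel == 0 else "post")
    return rel, window

race = date(2019, 6, 9)  # hypothetical race date
print(label_relative_day(date(2019, 6, 7), race))   # two days before the race
print(label_relative_day(date(2019, 6, 12), race))  # three days after the race
```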

Author

Andrés González Santa Cruz

Published

July 7, 2025

Code
# remove objects and memory
rm(list=ls());gc()
          used (Mb) gc trigger (Mb) max used (Mb)
Ncells  835773 44.7    1627308   87  1176511 62.9
Vcells 1680897 12.9    8388608   64  3434063 26.2
Code
# close any open graphics devices (the null device is number 1)
while(dev.cur() > 1) dev.off()
cat("\014")
Code
load(paste0(getwd(),"/_data/step1.RData"))

Load libraries and data

In particular, we use the weathercan package, which provides historical climate and environmental measurements from weather stations across Canada1.

Code
#clear font cache
#system("fc-cache -f -v")

#check R version (same requirement on Windows and Linux)
if(Sys.info()["sysname"] %in% c("Windows", "Linux")){
  if (getRversion() != "4.4.1") { stop("R version 4.4.1 required. Current: ", getRversion()) }
}
#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:
#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:#:

# install.packages(c("dplyr", #for data
#                    "tidyr", #for data
#                    "lubridate",  #for dates
#                    "openxlsx", #for excel files
#                    "rio", #for importing and exporting data
#                    "purrr", #for iterating in databases
#                    "devtools", #for external packages
#                    "DiagrammeR", #to visualize DAGs
#                    "dagitty",
#                    "ggdag",
#                    "ggplot2", #for graphics
#                    "kableExtra", #pretty tables
#                    "quarto", #for documents
#                    "geosphere", #for coordinates and classifying
#                    "geepack", #for regression
#                    "glmmTMB", #For GLMMs
#                    "DHARMa", #For residual diagnostics
#                    "car", #For hypothesis testing
#                    "brms", #Bayesian model
#                    "bayesplot",
#                    "loo",
#                    "Synth", #for synthetic control method
#                    "weathercan",#for weather data
#                    "sandwich", #cluster robust intervals
#                    "emmeans", #for predictions
#                    "gnm", #Conditional Poisson models
#                    "splines", #nonlinearity 
#                    "geeM", #negative binomial and more flexible GEE models
#                    "PanelMatch", #Matching technique with panel data
#                    "scpi", #synthetic control
#                    "nixtlar", #for time series analysis and prediction
#                    "CausalImpact", #for time series causal impact
#                    "forecast" #for time series analysis, prediction, and decomposition
#                    ))
library(dplyr); library(lubridate); library(tidyverse)
library(openxlsx); library(rio); library(purrr)
library(dagitty); library(ggdag); library(kableExtra); library(geosphere)
library(geepack); library(lme4); library(glmmTMB); library(DHARMa); library(car)
library(brms); library(bayesplot); library(loo)
library(Synth); library(weathercan); library(sandwich); library(emmeans)
library(gnm); library(splines); library(geeM); library(plm)
library(PanelMatch); library(scpi); library(fect)
library(nixtlar); library(CausalImpact); library(forecast); library(webshot)

Code
#special repository indicated for the package
if(!require(weathercan)){
   install.packages("weathercan", 
                  repos = c("https://ropensci.r-universe.dev", "https://cloud.r-project.org")); library(weathercan)
  }

#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_
#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_

if(!require(bpmn)){devtools::install_github("bergant/bpmn")}
Code
#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_
sum_dates <- function(x){
 
  cbind.data.frame(
    min= as.Date(min(unclass(as.Date(x)), na.rm=T), origin = "1970-01-01"),
    p001= as.Date(quantile(unclass(as.Date(x)), .001, na.rm=T), origin = "1970-01-01"),
    p005= as.Date(quantile(unclass(as.Date(x)), .005, na.rm=T), origin = "1970-01-01"),
    p025= as.Date(quantile(unclass(as.Date(x)), .025, na.rm=T), origin = "1970-01-01"),
    p25= as.Date(quantile(unclass(as.Date(x)), .25, na.rm=T), origin = "1970-01-01"),
    p50= as.Date(quantile(unclass(as.Date(x)), .5, na.rm=T), origin = "1970-01-01"),
    p75= as.Date(quantile(unclass(as.Date(x)), .75, na.rm=T), origin = "1970-01-01"),
    p975= as.Date(quantile(unclass(as.Date(x)), .975, na.rm=T), origin = "1970-01-01"),
    p995= as.Date(quantile(unclass(as.Date(x)), .995, na.rm=T), origin = "1970-01-01"),
    p999= as.Date(quantile(unclass(as.Date(x)), .999, na.rm=T), origin = "1970-01-01"),
    max= as.Date(max(unclass(as.Date(x)), na.rm=T), origin = "1970-01-01")
  )
}
smd_bin <- function(x,y){
  # standardized mean difference for two proportions x and y:
  # (x - y) / sqrt((x*(1-x) + y*(1-y)) / 2)
  z <- x*(1-x)
  t <- y*(1-y)
  k <- sum(z,t)
  l <- k/2
  
  return((x-y)/sqrt(l))
  
}

theme_custom_sjplot2 <- function(base_size = 12, base_family = "") {
  theme_minimal(base_size = base_size, base_family = base_family) +
    theme(
      # Text elements
      text = element_text(size = base_size, family = base_family),
      plot.title = element_text(face = "bold", hjust = 0.5, size = base_size * 1.2),
      plot.subtitle = element_text(hjust = 0.5, margin = margin(b = 10)),
      axis.title = element_text(size = base_size, face = "bold"),
      axis.text = element_text(size = base_size * 0.8),
      axis.text.x = element_text(angle = 0, hjust = 0.5, vjust = 0.5),
      axis.text.y = element_text(angle = 0, hjust = 1, vjust = 0.5),
      axis.title.x = element_text(margin = margin(t = 10)),
      axis.title.y = element_text(margin = margin(r = 10)),
      
      # Plot layout
      plot.margin = margin(t = 20, r = 20, b = 20, l = 20),
      panel.grid.major = element_line(color = "grey80"),
      panel.grid.minor = element_blank(),
      legend.position = "right",
      legend.text = element_text(size = base_size * 0.8),
      legend.title = element_text(size = base_size, face = "bold"),
      legend.background = element_rect(fill = "white", colour = NA),
      legend.box.background = element_rect(colour = "grey80", linetype = "solid"),
      legend.key = element_rect(fill = "white", colour = "white")
    )
}

#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_
#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_
num_cores <- parallel::detectCores() -1
data.table::setDTthreads(threads = num_cores)#restore_after_fork = NULL, throttle = NULL)

#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_
#CONFIG #######################################################################
#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_#_

options(scipen=2) #prefer fixed notation over scientific when printing numbers


nixtlar::nixtla_client_setup(api_key = readLines(paste0(gsub("f1/","", getwd() ),"/key.txt"))[[2]])
Warning in readLines(paste0(gsub("f1/", "", getwd()), "/key.txt")): incomplete
final line found on 'H:/Mi unidad/PERSONAL
ANDRES/UCH_salud_publica/pasantia/f1/key.txt'
API key has been set for the current session.

Collapse

We collapsed the data by aggregating it at the day–year level for treatment and control areas. Within each cell we summed the number of high-speed tickets, documented vehicles, and license holders; averaged the minimum and maximum temperatures; and summarized the 2-day-lagged precipitation and the total daily precipitation by their medians. We restricted the post-event outcome period to 7 days.
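The collapse recipe reduces to: sum counts within each area-day cell, then derive a rate per million exposed vehicles and a log-exposure offset for count models. A minimal sketch (in Python, outside the R pipeline, with made-up numbers; the names mirror, but are not, the actual columns):

```python
import math
from collections import defaultdict

# toy rows: (area, day, tickets, exposed_vehicles) -- made-up numbers
rows = [
    ("treatment", 1, 12, 400_000),
    ("treatment", 1, 5, 150_000),
    ("control",   1, 7, 300_000),
]

# sum counts within each (area, day) cell
cells = defaultdict(lambda: {"tickets": 0, "veh": 0})
for area, day, tickets, veh in rows:
    cell = cells[(area, day)]
    cell["tickets"] += tickets  # analogue of sum_velocidad
    cell["veh"] += veh          # analogue of exp_veh

# derive a rate per 1MM vehicles and a log-exposure offset
for cell in cells.values():
    cell["rate_veh"] = cell["tickets"] / cell["veh"] * 1e6
    cell["off_veh"] = math.log(cell["veh"])
```

The rate is descriptive only; in the count models the raw count is modelled with the log exposure as an offset, mirroring the `off_veh` / `off_lic` columns.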

Code
#total_precip_median_lin lag_2_prec_median_lin_imp
if(identical(collisions_weather_corr_rect$lag_2_prec_median_lin_imp, collisions_weather_corr_rect$total_precip_median_lin)){warning("Identical lagged with not lagged variable")}


collisions_weather_corr_rect_synth <-
  collisions_weather_corr_rect |>
  filter(date_num <= unclass(race_date) + 7) |>
  group_by(tr_contr_sens, year, yday_corr, date) |>
  summarise(
    sum_velocidad = sum(velocidad, na.rm = TRUE),
    exp_veh       = sum(vehicles_use_type,        na.rm = TRUE), # number of exposed vehicles
    exp_lic       = sum(license_holders_sex_age,  na.rm = TRUE), # number of license holders
    median_lag_2_prec_median_imp = median(lag_2_prec_median_lin_imp, na.rm = TRUE),
    mean_min_temp_mean_lin       = mean(min_temp_mean_lin,       na.rm = TRUE),
    mean_max_temp_mean_lin       = mean(max_temp_mean_lin,       na.rm = TRUE),
    median_total_precip_median_lin = median(total_precip_median_lin, na.rm = TRUE),
    .groups = "drop"
  ) |>
  mutate(                                     # ── aux variables
    rate_veh = sum_velocidad / exp_veh * 1e6, # not to model
    rate_lic = sum_velocidad / exp_lic * 1e6, # not to model
    off_veh  = log(exp_veh),                  # offset
    off_lic  = log(exp_lic)
  ) |>
  left_join(races, by = c("date" = "race_date")) |>
  mutate(
    race_day = if_else(!is.na(year.y), 1L, 0L),
    id       = as.numeric(factor(paste0(tr_contr_sens, year.x)))
  ) |>
  select(-year.y)


if(identical(collisions_weather_corr_rect_synth$median_total_precip_median_lin, collisions_weather_corr_rect_synth$median_lag_2_prec_median_imp)){warning("Identical lagged with not lagged variable")}

cat("Day of the race \n")
collisions_weather_corr_rect_synth |> 
  filter(race_day==1) |> reframe(min=yday_corr, max=yday_corr) |>  print(n=40)

############################################################
## 1.  “Only-treated, one series”  ─────────────────────────
##     (collisions_weather_corr_rect_synth_tr)
############################################################
cat("Generate a summary of counts of every year in treated only\n")
collisions_weather_corr_rect_synth_tr <- 
  collisions_weather_corr_rect_synth %>%              # start from the long table
  filter(tr_contr_sens == "treatment") %>%            # keep treated rows only
  group_by(tr_contr_sens, yday_corr) %>%              # collapse across MRC × years
  summarise(
    sum_velocidad = sum(sum_velocidad, na.rm = TRUE),
    exp_veh       = sum(exp_veh,       na.rm = TRUE),
    exp_lic       = sum(exp_lic,       na.rm = TRUE),
    mean_median_lag_2_prec_median_imp   = mean(median_lag_2_prec_median_imp,   na.rm = TRUE),
    mean_min_temp_mean_lin              = mean(mean_min_temp_mean_lin,         na.rm = TRUE),
    mean_max_temp_mean_lin              = mean(mean_max_temp_mean_lin,         na.rm = TRUE),
    mean_median_total_precip_median_lin = mean(median_total_precip_median_lin, na.rm = TRUE),
    .groups = "drop"
  ) %>% 
  mutate(
    rate_veh = sum_velocidad / exp_veh * 1e6,
    rate_lic = sum_velocidad / exp_lic * 1e6,
    off_veh  = log(exp_veh),
    off_lic  = log(exp_lic),
    year.x   = 2000,                       # dummy year / date just to keep structure
    date     = as.Date("2019-05-19"),
    race_day = 0L,
    id       = 40                          # single ID for the synthetic treated series
  )

############################################################
## 2.  “One treated - many controls”  ─────────────────────
##     (collisions_weather_corr_rect_synth_one_tr_many_cntrls)
############################################################

# controls left at their original resolution (one ID per MRC–year)
controls_long <- collisions_weather_corr_rect_synth %>% 
  filter(tr_contr_sens != "treatment") %>% 
  rename(
    mean_median_lag_2_prec_median_imp   = median_lag_2_prec_median_imp,
    mean_median_total_precip_median_lin = median_total_precip_median_lin
  )

collisions_weather_corr_rect_synth_one_tr_many_cntrls <- bind_rows(
  controls_long,
  collisions_weather_corr_rect_synth_tr          # the single treated series from step 1
)

############################################################
## 3.  “Only-controls, one series”  ───────────────────────
##     (collisions_weather_corr_rect_synth_cntr)
############################################################

collisions_weather_corr_rect_synth_cntr <- 
  collisions_weather_corr_rect_synth %>% 
  filter(tr_contr_sens != "treatment") %>%          # keep controls
  group_by(tr_contr_sens, yday_corr) %>%            # collapse across all control IDs
  summarise(
    sum_velocidad = sum(sum_velocidad, na.rm = TRUE),
    exp_veh       = sum(exp_veh,       na.rm = TRUE),
    exp_lic       = sum(exp_lic,       na.rm = TRUE),
    
    # distributional summaries to keep a sense of variability
    p25_lag_2_prec_median_imp   = quantile(median_lag_2_prec_median_imp,   .25, na.rm = TRUE),
    p75_lag_2_prec_median_imp   = quantile(median_lag_2_prec_median_imp,   .75, na.rm = TRUE),
    mean_median_lag_2_prec_median_imp   = mean(median_lag_2_prec_median_imp,   na.rm = TRUE),
    
    p25_min_temp_mean_lin = quantile(mean_min_temp_mean_lin, .25, na.rm = TRUE),
    p75_min_temp_mean_lin = quantile(mean_min_temp_mean_lin, .75, na.rm = TRUE),
    mean_min_temp_mean_lin = mean(mean_min_temp_mean_lin, na.rm = TRUE),
    
    p25_max_temp_mean_lin = quantile(mean_max_temp_mean_lin, .25, na.rm = TRUE),
    p75_max_temp_mean_lin = quantile(mean_max_temp_mean_lin, .75, na.rm = TRUE),
    mean_max_temp_mean_lin = mean(mean_max_temp_mean_lin, na.rm = TRUE),
    
    p25_total_precip_median_lin = quantile(median_total_precip_median_lin, .25, na.rm = TRUE),
    p75_total_precip_median_lin = quantile(median_total_precip_median_lin, .75, na.rm = TRUE),
    mean_median_total_precip_median_lin = mean(median_total_precip_median_lin, na.rm = TRUE),
    .groups = "drop"
  ) %>% 
  mutate(
    rate_veh = sum_velocidad / exp_veh * 1e6,
    rate_lic = sum_velocidad / exp_lic * 1e6,
    off_veh  = log(exp_veh),
    off_lic  = log(exp_lic),
    year.x   = 2000,
    date     = as.Date("2019-05-19"),
    race_day = 0L,
    id       = 1                               # single ID for the synthetic control
  )
  # note: the summary-variable names already match the treated table, so no rename is needed

############################################################
## Quick sanity checks
############################################################
# glimpse(collisions_weather_corr_rect_synth_tr)
# glimpse(collisions_weather_corr_rect_synth_one_tr_many_cntrls)
# glimpse(collisions_weather_corr_rect_synth_cntr)

cat("Check imbalances\n")
collisions_weather_corr_rect_synth_one_tr_many_cntrls |> group_by(id) |> reframe(max=max(yday_corr), min=min(yday_corr), n=n()) |> tail()


invisible("Consolidate one treated and one control")
collisions_weather_corr_rect_synth_cntr_tr <- 
rbind.data.frame(collisions_weather_corr_rect_synth_cntr,collisions_weather_corr_rect_synth_tr |> 
  # rename(collisions_weather_corr_rect_synth_tr, "median_lag_2_prec_mean_imp"= "mean_lag_2_prec_mean_imp", "median_total_precip_mean_lin"= "mean_total_precip_mean_lin")|>  
  mutate(p25_lag_2_prec_median_imp=0, p75_lag_2_prec_median_imp=0, p25_min_temp_mean_lin=0, p75_min_temp_mean_lin=0, p25_max_temp_mean_lin=0, p75_max_temp_mean_lin=0, p25_total_precip_median_lin=0, p75_total_precip_median_lin=0))|> data.frame()
Day of the race 
# A tibble: 40 × 2
     min   max
   <int> <int>
 1    36    36
 2    36    36
 3    36    36
 4    36    36
 5    36    36
 6    36    36
 7    36    36
 8    36    36
 9    36    36
10    36    36
11    36    36
12    36    36
13    36    36
14    36    36
15    36    36
16    36    36
17    36    36
18    36    36
19    36    36
20    36    36
21    36    36
22    36    36
23    36    36
24    36    36
25    36    36
26    36    36
27    36    36
28    36    36
29    36    36
30    36    36
31    36    36
32    36    36
33    36    36
34    36    36
35    36    36
36    36    36
37    36    36
38    36    36
39    36    36
40    36    36
Generate a summary of counts of every year in treated only
Check imbalances
# A tibble: 6 × 4
     id   max   min     n
  <dbl> <int> <int> <int>
1    16    43     1    43
2    17    43     1    43
3    18    43     1    43
4    19    43     1    43
5    20    43     1    43
6    40    43     1    43
Code
collisions_weather_corr_rect_quebec_synth <-
  collisions_weather_corr_rect |>
  filter(date_num <= unclass(race_date) + 7) |>
  group_by(tr_contr, year, yday_corr, date) |>
  summarise(
    sum_velocidad = sum(velocidad, na.rm = TRUE),
    exp_veh       = sum(vehicles_use_type,        na.rm = TRUE), # number of exposed vehicles
    exp_lic       = sum(license_holders_sex_age,  na.rm = TRUE), # number of license holders
    median_lag_2_prec_median_imp = median(lag_2_prec_median_lin_imp, na.rm = TRUE),
    mean_min_temp_mean_lin       = mean(min_temp_mean_lin,       na.rm = TRUE),
    mean_max_temp_mean_lin       = mean(max_temp_mean_lin,       na.rm = TRUE),
    median_total_precip_median_lin = median(total_precip_median_lin, na.rm = TRUE),
    .groups = "drop"
  ) |>
  mutate(                                     # ── aux variables
    rate_veh = sum_velocidad / exp_veh * 1e6, # not to model
    rate_lic = sum_velocidad / exp_lic * 1e6, # not to model
    off_veh  = log(exp_veh),                  # offset
    off_lic  = log(exp_lic)
  ) |>
  left_join(races, by = c("date" = "race_date")) |>
  mutate(
    race_day = if_else(!is.na(year.y), 1L, 0L),
    id       = as.numeric(factor(paste0(tr_contr, year.x)))
  ) |>
  select(-year.y)


collisions_weather_corr_rect_quebec_synth_tr <- 
  collisions_weather_corr_rect_quebec_synth %>%              # start from the long table
  filter(tr_contr == "treatment") %>%            # keep treated rows only
  group_by(tr_contr, yday_corr) %>%              # collapse across MRC × years
  summarise(
    sum_velocidad = sum(sum_velocidad, na.rm = TRUE),
    exp_veh       = sum(exp_veh,       na.rm = TRUE),
    exp_lic       = sum(exp_lic,       na.rm = TRUE),
    mean_median_lag_2_prec_median_imp   = mean(median_lag_2_prec_median_imp,   na.rm = TRUE),
    mean_min_temp_mean_lin              = mean(mean_min_temp_mean_lin,         na.rm = TRUE),
    mean_max_temp_mean_lin              = mean(mean_max_temp_mean_lin,         na.rm = TRUE),
    mean_median_total_precip_median_lin = mean(median_total_precip_median_lin, na.rm = TRUE),
    .groups = "drop"
  ) %>% 
  mutate(
    rate_veh = sum_velocidad / exp_veh * 1e6,
    rate_lic = sum_velocidad / exp_lic * 1e6,
    off_veh  = log(exp_veh),
    off_lic  = log(exp_lic),
    year.x   = 2000,                       # dummy year / date just to keep structure
    date     = as.Date("2019-05-19"),
    race_day = 0L,
    id       = 40                          # single ID for the synthetic treated series
  )

controls_quebec_long <- collisions_weather_corr_rect_quebec_synth %>% 
  filter(tr_contr != "treatment") %>% 
  rename(
    mean_median_lag_2_prec_median_imp   = median_lag_2_prec_median_imp,
    mean_median_total_precip_median_lin = median_total_precip_median_lin
  )

collisions_weather_corr_rect_quebec_synth_one_tr_many_cntrls <- bind_rows(
  controls_quebec_long,
  collisions_weather_corr_rect_quebec_synth_tr          # the single treated series from step 1
)


collisions_weather_corr_rect_quebec_synth_cntr <- 
  collisions_weather_corr_rect_quebec_synth %>% 
  filter(tr_contr != "treatment") %>%          # keep controls
  group_by(tr_contr, yday_corr) %>%            # collapse across all control IDs
  summarise(
    sum_velocidad = sum(sum_velocidad, na.rm = TRUE),
    exp_veh       = sum(exp_veh,       na.rm = TRUE),
    exp_lic       = sum(exp_lic,       na.rm = TRUE),
    
    # distributional summaries to keep a sense of variability
    p25_lag_2_prec_median_imp   = quantile(median_lag_2_prec_median_imp,   .25, na.rm = TRUE),
    p75_lag_2_prec_median_imp   = quantile(median_lag_2_prec_median_imp,   .75, na.rm = TRUE),
    mean_median_lag_2_prec_median_imp   = mean(median_lag_2_prec_median_imp,   na.rm = TRUE),
    
    p25_min_temp_mean_lin = quantile(mean_min_temp_mean_lin, .25, na.rm = TRUE),
    p75_min_temp_mean_lin = quantile(mean_min_temp_mean_lin, .75, na.rm = TRUE),
    mean_min_temp_mean_lin = mean(mean_min_temp_mean_lin, na.rm = TRUE),
    
    p25_max_temp_mean_lin = quantile(mean_max_temp_mean_lin, .25, na.rm = TRUE),
    p75_max_temp_mean_lin = quantile(mean_max_temp_mean_lin, .75, na.rm = TRUE),
    mean_max_temp_mean_lin = mean(mean_max_temp_mean_lin, na.rm = TRUE),
    
    p25_total_precip_median_lin = quantile(median_total_precip_median_lin, .25, na.rm = TRUE),
    p75_total_precip_median_lin = quantile(median_total_precip_median_lin, .75, na.rm = TRUE),
    mean_median_total_precip_median_lin = mean(median_total_precip_median_lin, na.rm = TRUE),
    .groups = "drop"
  ) %>% 
  mutate(
    rate_veh = sum_velocidad / exp_veh * 1e6,
    rate_lic = sum_velocidad / exp_lic * 1e6,
    off_veh  = log(exp_veh),
    off_lic  = log(exp_lic),
    year.x   = 2000,
    date     = as.Date("2019-05-19"),
    race_day = 0L,
    id       = 1                               # single ID for the synthetic control
  )
  # note: the summary-variable names already match the treated table, so no rename is needed

collisions_weather_corr_rect_quebec_synth_cntr_tr <- 
  rbind.data.frame(collisions_weather_corr_rect_quebec_synth_cntr,collisions_weather_corr_rect_quebec_synth_tr |> 
  mutate(p25_lag_2_prec_median_imp=0, p75_lag_2_prec_median_imp=0, p25_min_temp_mean_lin=0, p75_min_temp_mean_lin=0, 
         p25_max_temp_mean_lin=0, p75_max_temp_mean_lin=0, p25_total_precip_median_lin=0, p75_total_precip_median_lin=0))|> data.frame()

Test series structure

Code
acf(collisions_weather_corr_rect_synth_tr$rate_veh*10,  lag.max = 50, main = "Treatments\nACF:high-speed tickets per 1MM vehicles")
pacf(collisions_weather_corr_rect_synth_tr$rate_veh*10, lag.max = 50, main = "Treatments\nPACF:high-speed tickets per 1MM vehicles")
ACF plots, treated
We observed partial autocorrelation out to a lag of about 10 days in the treated series.

Code
acf(collisions_weather_corr_rect_synth_cntr$rate_veh*10,  lag.max = 50, main = "Controls,\nACF:high-speed tickets per 1MM vehicles")
pacf(collisions_weather_corr_rect_synth_cntr$rate_veh*10, lag.max = 50, main = "Controls,\nPACF:high-speed tickets per 1MM vehicles")
ACF plots, controls

We observed partial autocorrelation out to a lag of about 25 days in the control series.
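The lag structure read off these plots can be reproduced by hand: partial autocorrelations follow from the sample autocorrelations via the Durbin–Levinson recursion. A small illustrative sketch (in Python with numpy, outside the R pipeline; not the implementation used by `acf()`/`pacf()`), checked on a simulated AR(1) series:

```python
import numpy as np

def sample_acf(x, nlags):
    # sample autocorrelations r_0..r_nlags
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return np.array([1.0] + [np.sum(x[:-k] * x[k:]) / denom
                             for k in range(1, nlags + 1)])

def sample_pacf(x, nlags):
    """PACF via the Durbin-Levinson recursion on the sample ACF."""
    r = sample_acf(x, nlags)
    phi = np.zeros((nlags + 1, nlags + 1))
    pacf = np.zeros(nlags + 1)
    pacf[0] = 1.0
    phi[1, 1] = pacf[1] = r[1]
    for k in range(2, nlags + 1):
        num = r[k] - np.sum(phi[k - 1, 1:k] * r[k - 1:0:-1])
        den = 1.0 - np.sum(phi[k - 1, 1:k] * r[1:k])
        phi[k, k] = pacf[k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return pacf

# AR(1) check: PACF should be ~0.6 at lag 1 and ~0 beyond
rng = np.random.default_rng(1)
e = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.6 * x[t - 1] + e[t]
p = sample_pacf(x, 5)
```

For an AR(p) process the PACF cuts off after lag p, which is the property the plots above exploit.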

Code
x_vals <- time(forecast::msts(
    collisions_weather_corr_rect_synth_cntr$rate_veh*10,
    start = c(2019, 1),
    seasonal.periods = c(7)
))


forecast::autoplot(forecast::mstl(forecast::msts(collisions_weather_corr_rect_synth_tr$rate_veh*10,start = c(2019, 1),seasonal.periods = c(7)), robust = TRUE))+ ggtitle("Treated")+
  scale_x_continuous(
    breaks = seq(2019, 2025, by = 1/7)[seq(1, 43, by = 7)],          # where the ticks sit
    labels = seq(-35, 7)[seq(1, 43, by = 7)]         # what the ticks say
  ) +
  geom_vline(
    xintercept = seq(2019, 2025, by = 1/7)[which.min(abs(seq(-35, 7)))],   # rel_day 0
    linetype   = "dashed", colour = "black")
  

forecast::autoplot(forecast::mstl(forecast::msts(collisions_weather_corr_rect_synth_cntr$rate_veh*10,start = c(2019, 1),seasonal.periods = c(7)), robust = TRUE))+ ggtitle("Controls")+
    scale_x_continuous(
    breaks = seq(2019, 2025, by = 1/7)[seq(1, 43, by = 7)],          # where the ticks sit
    labels = seq(-35, 7)[seq(1, 43, by = 7)]         # what the ticks say
  ) +
  geom_vline(
    xintercept = seq(2019, 2025, by = 1/7)[which.min(abs(seq(-35, 7)))],   # rel_day 0
    linetype   = "dashed", colour = "black")
Weekly decomposition of series

The decomposition shows that differences between the treated and control groups emerge in the trend, not in the weekly seasonality: the treated series trends downward while the controls trend upward. Any model or graphical comparison should therefore adjust for the weekly effect and focus on the diverging long-term trajectories.
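The trend/seasonal split that mstl() performs can be approximated by hand: estimate the trend with a centred 7-day moving average, average the detrended values by day of week to obtain the weekly seasonal component, and keep the rest as remainder. A simplified sketch (in Python with numpy, outside the R pipeline; no robustness weighting, unlike mstl(..., robust = TRUE)):

```python
import numpy as np

def weekly_decompose(x, period=7):
    """Additive decomposition: centred moving-average trend,
    day-of-week seasonal means, remainder."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    half = period // 2
    # centred moving average (an even period would need an extra averaging step)
    trend = np.full(n, np.nan)
    for t in range(half, n - half):
        trend[t] = np.mean(x[t - half:t + half + 1])
    detrended = x - trend
    # weekly seasonal: mean of detrended values for each day of week
    seasonal = np.array([np.nanmean(detrended[d::period]) for d in range(period)])
    seasonal -= seasonal.mean()            # centre the weekly effect at zero
    seasonal_full = np.resize(seasonal, n)  # repeat cyclically over the series
    remainder = x - trend - seasonal_full
    return trend, seasonal_full, remainder

# toy series: linear trend + a zero-mean weekly pattern
n = 10 * 7
true_seasonal = np.array([2.0, -1.0, 0.5, -0.5, 1.0, -1.5, -0.5])
x = 0.1 * np.arange(n) + np.resize(true_seasonal, n)
trend, seas, rem = weekly_decompose(x)
```

On this noiseless toy series the procedure recovers the linear trend and the weekly pattern exactly (away from the edges, where the centred moving average is undefined).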

We visualize the series, distinguishing treated from control cities.

Code
ggplot(collisions_weather_corr_rect_synth_cntr_tr, aes(x = yday_corr, y = sum_velocidad)) +
  geom_line(aes(group = id, colour = ifelse(id==40,"Treated", "Controls"))) +
  scale_colour_brewer(palette = "Dark2", name = "MRC") +
  geom_vline(xintercept = 30, linetype = "dashed", color = "black", linewidth=1) +
  labs(x = "Days (-30 to 5 days post-treatment)", y = "Number of High‑speed Tickets (sum)") +
  theme_minimal() +
  theme(panel.grid.minor = element_blank(), legend.position = "bottom") 
Daily High‑speed tickets per Treated/Control cities

Code
ggplot(collisions_weather_corr_rect_synth_cntr_tr, aes(x = yday_corr, y = (sum_velocidad/exp_veh)*1e6)) +
  geom_line(aes(group = id, colour = ifelse(id==40,"Treated", "Controls"))) +
  scale_colour_brewer(palette = "Dark2", name = "MRC") +
  geom_vline(xintercept = 30, linetype = "dashed", color = "black", linewidth=1) +
  labs(x = "Days (-30 to 5 days post-treatment)", y = "Daily high-speed tickets per 1MM vehicle counts (sum)") +
  theme_minimal() +
  theme(panel.grid.minor = element_blank(), legend.position = "bottom") 
Rate of daily high-speed tickets per 1MM vehicle counts by Treated/Control status

Code
ggplot(collisions_weather_corr_rect_synth_cntr_tr, aes(x = yday_corr, y = mean_min_temp_mean_lin)) +
  geom_line(aes(group = id, colour = ifelse(id==40,"Treated", "Controls"))) +
  scale_colour_brewer(palette = "Dark2", name = "MRC") +
  geom_vline(xintercept = 30, linetype = "dashed", color = "black", linewidth=1) +
  labs(x = "Days (-30 to 5 days post-treatment)", y = "Min temperature") +
    geom_ribbon(
    data = collisions_weather_corr_rect_synth_cntr,
    aes(x = yday_corr, y = mean_min_temp_mean_lin,
        ymin = p25_min_temp_mean_lin,
        ymax = p75_min_temp_mean_lin),
    inherit.aes = FALSE,
    fill = "grey70",  # or a colour that matches the palette
    alpha = 0.3
  )+
  theme_minimal() +
  theme(panel.grid.minor = element_blank(), legend.position = "bottom") 
Min temperature by Treated/Control status

Code
ggplot(collisions_weather_corr_rect_synth_cntr_tr, aes(x = yday_corr, y = mean_max_temp_mean_lin)) +
  geom_line(aes(group = id, colour = ifelse(id==40,"Treated", "Controls"))) +
  scale_colour_brewer(palette = "Dark2", name = "MRC") +
  geom_vline(xintercept = 30, linetype = "dashed", color = "black", linewidth=1) +
  labs(x = "Days (-30 to 5 days post-treatment)", y = "Max temperature") +
    geom_ribbon(
    data = subset(collisions_weather_corr_rect_synth_cntr_tr, id != 40),
    aes(x = yday_corr, y = mean_max_temp_mean_lin,
        ymin = p25_max_temp_mean_lin,
        ymax = p75_max_temp_mean_lin),
    inherit.aes = FALSE,
    fill = "grey70",  # or a colour that matches the palette
    alpha = 0.3
  )+
  theme_minimal() +
  theme(panel.grid.minor = element_blank(), legend.position = "bottom") 
Max temperature by Treated/Control status

Code
ggplot(collisions_weather_corr_rect_synth_cntr_tr, aes(x = yday_corr, y = mean_median_total_precip_median_lin)) +
  geom_line(aes(group = id, colour = ifelse(id==40,"Treated", "Controls"))) +
  scale_colour_brewer(palette = "Dark2", name = "MRC") +
  geom_vline(xintercept = 30, linetype = "dashed", color = "black", linewidth=1) +
  labs(x = "Days (-30 to 5 days post-treatment)", y = "Precipitation\n(annual mean of city-level medians, where each city median\nis computed from the medians of its weather stations)") +
    geom_ribbon(
    data = subset(collisions_weather_corr_rect_synth_cntr_tr, id != 40),
    aes(x = yday_corr, y = mean_median_total_precip_median_lin,
        ymin = p25_total_precip_median_lin,
        ymax = p75_total_precip_median_lin),
    inherit.aes = FALSE,
    fill = "grey70",  # or a colour that matches the palette
    alpha = 0.3
  )+
  theme_minimal() +
  theme(panel.grid.minor = element_blank(), legend.position = "bottom") 
Total precipitation by Treated/Control status

Code
ggplot(collisions_weather_corr_rect_synth_cntr_tr, aes(x = yday_corr, y = mean_median_lag_2_prec_median_imp)) +
  geom_line(aes(group = id, colour = ifelse(id==40,"Treated", "Controls"))) +
  scale_colour_brewer(palette = "Dark2", name = "MRC") +
  geom_vline(xintercept = 30, linetype = "dashed", color = "black", linewidth=1) +
  labs(x = "Days (-30 to 5 days post-treatment)", y = "Precipitation (2-day lag)\n(annual mean of city-level medians, where each city median\nis computed from the medians of its weather stations)") +
    geom_ribbon(
    data = subset(collisions_weather_corr_rect_synth_cntr_tr, id != 40),
    aes(x = yday_corr, y = mean_median_lag_2_prec_median_imp,
        ymin = p25_lag_2_prec_median_imp,
        ymax = p75_lag_2_prec_median_imp),
    inherit.aes = FALSE,
    fill = "grey70",  # or a colour that matches the palette
    alpha = 0.3
  )+
  theme_minimal() +
  theme(panel.grid.minor = element_blank(), legend.position = "bottom") 
Total precipitation (2-day lag) by Treated/Control status
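The 2-day lagged precipitation covariate plotted above can be built with a grouped `dplyr::lag`. The toy table and column names below are illustrative stand-ins for the city-day weather data, not the study objects.

```r
library(dplyr)

# Toy city-day table (two cities, five days each; values are made up).
weather <- data.frame(
  id           = rep(1:2, each = 5),
  yday_corr    = rep(1:5, 2),
  total_precip = c(0, 3, 1, 0, 2, 5, 0, 0, 4, 1)
)

# Lag precipitation by 2 days within each city; the first two days of
# each city are NA by construction.
weather_lagged <- weather |>
  group_by(id) |>
  arrange(yday_corr, .by_group = TRUE) |>
  mutate(lag_2_prec = dplyr::lag(total_precip, n = 2)) |>
  ungroup()
```

Grouping before lagging is the important step: without `group_by(id)`, the lag would leak the last days of one city into the first days of the next.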

Analysis plan

Code
library(DiagrammeR)

gr <- 
grViz("
digraph study_design {

graph [layout = dot, rankdir = TB]

node [shape = rectangle, style = filled, color = LightSkyBlue, fontname = Helvetica]

start [label = 'Analysis Start']

adelanto [label = 'Advance two days (D)']
no_adelanto [label = 'No advance\n(from race day) (D_off)']

quebec_only [label = 'Quebec only as control (D)']
quebec_sherbrooke [label = 'Quebec and Sherbrooke\nas control (D_sens)']

exp_3days [label = 'Exposure 3 days\nafter the race (D)']
exp_7days [label = 'Exposure 7 days\nafter the race (D7)']

# edges
start -> {adelanto no_adelanto}

adelanto -> {quebec_only quebec_sherbrooke}

quebec_only -> {exp_3days exp_7days}

}", 
  width = 1000,
  height = 1400)
gr
Code
unlink(paste0(getwd(),"/_figs/analysisplan_files"), recursive = TRUE)
htmlwidgets::saveWidget(gr, paste0(getwd(),"/_figs/analysisplan.html"))
webshot::webshot(paste0(getwd(),"/_figs/analysisplan.html"), 
                 paste0(getwd(),"/_figs/analysisplan.png"),
                 vwidth = 300, vheight = 300*1.2,  zoom=10, expand=100)  # adjust vwidth/vheight/zoom/expand as needed


Session info

Code
cat(paste0("R library: ", Sys.getenv("R_LIBS_USER"), "\n"))
cat(paste0("Date: ", withr::with_locale(new = c('LC_TIME' = 'C'), code = Sys.time()), "\n"))
cat(paste0("Editor context: ", getwd(), "\n"))
cat("quarto version: "); system("quarto --version") 

quarto::quarto_version()

save.image("_data/step12.RData")
saveRDS(collisions_weather_corr, file = "_data/collisions_weather.rds", compress = TRUE)
R library: H:/Mi unidad/PERSONAL ANDRES/UCH_salud_publica/pasantia/f1/f1/renv/library/windows/R-4.4/x86_64-w64-mingw32
Date: 2025-07-07 17:20:17.812539
Editor context: H:/Mi unidad/PERSONAL ANDRES/UCH_salud_publica/pasantia/f1/f1
quarto version: [1] 0
[1] '1.6.39'
Code
sesion_info <- devtools::session_info()
Warning in system2("quarto", "-V", stdout = TRUE, env = paste0("TMPDIR=", : the
executed command '"quarto"
TMPDIR=C:/Users/andre/AppData/Local/Temp/RtmpysUYja/file6aac5d727469 -V' had
status 1
Code
dplyr::select(
  tibble::as_tibble(sesion_info$packages),
  c(package, loadedversion, source)
) |> 
  knitr::kable(caption = "R packages", format = "html",
      col.names = c("Package", "Version", "Source"),
      row.names = FALSE,
      align = c("c", "l", "r")) |> 
  kableExtra::kable_styling(bootstrap_options = c("striped", "hover"), font_size = 12) |> 
  kableExtra::scroll_box(width = "100%", height = "375px")  
R packages
Package Version Source
abind 1.4-8 RSPM
assertthat 0.2.1 RSPM
backports 1.5.0 RSPM
bayesplot 1.12.0 RSPM
bdsmatrix 1.3-7 RSPM
bit 4.6.0 CRAN (R 4.4.3)
bit64 4.6.0-1 CRAN (R 4.4.3)
Boom 0.9.15 RSPM
BoomSpikeSlab 1.2.6 RSPM
boot 1.3-30 CRAN (R 4.4.1)
bpmn 0.1.0 Github (bergant/bpmn@628d3efaa27544c221b2fd7f1895301a63b70c49)
bridgesampling 1.1-2 RSPM
brms 2.22.0 RSPM
Brobdingnag 1.2-9 RSPM
broom 1.0.8 RSPM
bsts 0.9.10 RSPM
cachem 1.1.0 CRAN (R 4.4.3)
callr 3.7.6 RSPM
car 3.1-3 RSPM
carData 3.0-5 RSPM
CausalImpact 1.3.0 RSPM
CBPS 0.23 RSPM
checkmate 2.3.2 RSPM
cli 3.6.5 RSPM
coda 0.19-4.1 RSPM
codetools 0.2-20 CRAN (R 4.4.1)
collapse 2.1.2 RSPM
colorspace 2.1-1 RSPM
conquer 1.3.3 RSPM
cubature 2.1.4 RSPM
curl 6.2.3 CRAN (R 4.4.1)
CVXR 1.0-15 RSPM
dagitty 0.3-4 RSPM
data.table 1.17.4 RSPM
devtools 2.4.5 RSPM
DHARMa 0.4.7 RSPM
DiagrammeR 1.0.11 RSPM
digest 0.6.37 RSPM
distributional 0.5.0 RSPM
doParallel 1.0.17 RSPM
doRNG 1.8.6.2 RSPM
doSNOW 1.0.20 RSPM
dplyr 1.1.4 RSPM
dreamerr 1.5.0 RSPM
ECOSolveR 0.5.5 RSPM
ellipsis 0.3.2 RSPM
emmeans 1.11.1 RSPM
estimability 1.5.1 RSPM
evaluate 1.0.3 RSPM
farver 2.1.2 RSPM
fastDummies 1.7.5 RSPM
fastmap 1.2.0 CRAN (R 4.4.3)
fect 1.0.0 RSPM
fixest 0.12.1 RSPM
forcats 1.0.0 RSPM
foreach 1.5.2 RSPM
forecast 8.24.0 RSPM
Formula 1.2-5 RSPM
fracdiff 1.5-3 RSPM
fs 1.6.6 RSPM
future 1.49.0 RSPM
future.apply 1.11.3 RSPM
geeM 0.10.1 RSPM
geepack 1.3.12 RSPM
generics 0.1.4 RSPM
geosphere 1.5-20 RSPM
GGally 2.2.1 RSPM
ggdag 0.2.13 RSPM
ggplot2 3.5.2 RSPM
ggstats 0.9.0 RSPM
glmmTMB 1.1.11 RSPM
glmnet 4.1-9 RSPM
glmx 0.2-1 RSPM
globals 0.18.0 RSPM
glue 1.8.0 RSPM
gmp 0.7-5 RSPM
gnm 1.1-5 RSPM
gridExtra 2.3 RSPM
gtable 0.3.6 RSPM
gtools 3.9.5 RSPM
hms 1.1.3 CRAN (R 4.4.3)
htmltools 0.5.8.1 RSPM
htmlwidgets 1.6.4 RSPM
httpuv 1.6.16 RSPM
httr2 1.1.2 RSPM
igraph 2.1.4 RSPM
iterators 1.0.14 RSPM
jsonlite 2.0.0 CRAN (R 4.4.3)
kableExtra 1.4.0 RSPM
kernlab 0.9-33 RSPM
knitr 1.50 RSPM
labeling 0.4.3 RSPM
later 1.4.2 RSPM
lattice 0.22-6 CRAN (R 4.4.1)
lfe 3.1.1 RSPM
lifecycle 1.0.4 RSPM
listenv 0.9.1 RSPM
lme4 1.1-37 RSPM
lmtest 0.9-40 RSPM
loo 2.8.0 RSPM
lubridate 1.9.4 RSPM
magrittr 2.0.3 RSPM
MASS 7.3-60.2 CRAN (R 4.4.1)
MatchIt 4.7.2 RSPM
Matrix 1.7-0 CRAN (R 4.4.1)
MatrixModels 0.5-4 RSPM
matrixStats 1.5.0 RSPM
maxLik 1.5-2.1 RSPM
memoise 2.0.1 CRAN (R 4.4.3)
mgcv 1.9-1 CRAN (R 4.4.1)
mime 0.13 CRAN (R 4.4.3)
miniUI 0.1.2 RSPM
minqa 1.2.8 RSPM
miscTools 0.6-28 RSPM
mvtnorm 1.3-3 RSPM
nixtlar 0.6.2 RSPM
nlme 3.1-164 CRAN (R 4.4.1)
nloptr 2.2.1 RSPM
nnet 7.3-19 CRAN (R 4.4.1)
np 0.60-18 RSPM
numDeriv 2016.8-1.1 RSPM
openxlsx 4.2.8 RSPM
optimx 2025-4.9 RSPM
PanelMatch 3.1.1 RSPM
parallelly 1.44.0 RSPM
pillar 1.10.2 RSPM
pkgbuild 1.4.8 RSPM
pkgconfig 2.0.3 RSPM
pkgload 1.4.0 RSPM
plm 2.6-6 RSPM
plyr 1.8.9 RSPM
posterior 1.6.1 RSPM
pracma 2.4.4 CRAN (R 4.4.1)
processx 3.8.6 RSPM
profvis 0.4.0 RSPM
promises 1.3.2 RSPM
ps 1.9.1 RSPM
purrr 1.0.4 RSPM
Qtools 1.5.9 RSPM
quadprog 1.5-8 RSPM
quantdr 1.2.2 RSPM
quantmod 0.4.28 RSPM
quantreg 6.1 RSPM
quarto 1.4.4 RSPM
qvcalc 1.0.4 RSPM
R6 2.6.1 RSPM
rappdirs 0.3.3 CRAN (R 4.4.3)
rbibutils 2.3 RSPM
RColorBrewer 1.1-3 RSPM
Rcpp 1.0.14 RSPM
RcppParallel 5.1.10 RSPM
Rdpack 2.6.4 RSPM
readr 2.1.5 CRAN (R 4.4.3)
reformulas 0.4.1 RSPM
relimp 1.0-5 RSPM
remotes 2.5.0 RSPM
renv 1.1.2 CRAN (R 4.4.1)
reshape2 1.4.4 RSPM
rgenoud 5.9-0.11 RSPM
rio 1.2.3 RSPM
rlang 1.1.6 RSPM
rmarkdown 2.29 RSPM
Rmpfr 1.1-0 RSPM
rngtools 1.5.2 RSPM
rstantools 2.4.0 RSPM
rstudioapi 0.17.1 RSPM
sandwich 3.1-1 RSPM
scales 1.4.0 RSPM
scpi 3.0.0 RSPM
sessioninfo 1.2.3 RSPM
shape 1.4.6.1 RSPM
shiny 1.10.0 RSPM
snow 0.4-4 RSPM
sp 2.2-0 RSPM
SparseM 1.84-2 RSPM
stringi 1.8.7 RSPM
stringmagic 1.2.0 RSPM
stringr 1.5.1 RSPM
survival 3.6-4 CRAN (R 4.4.1)
svglite 2.2.1 RSPM
Synth 1.1-8 RSPM
systemfonts 1.2.3 RSPM
tensorA 0.36.2.1 RSPM
textshaping 1.0.1 RSPM
tibble 3.2.1 RSPM
tidygraph 1.3.1 RSPM
tidyr 1.3.1 RSPM
tidyselect 1.2.1 RSPM
tidyverse 2.0.0 RSPM
timechange 0.3.0 RSPM
timeDate 4041.110 RSPM
TMB 1.9.17 RSPM
tseries 0.10-58 RSPM
TTR 0.24.4 RSPM
tzdb 0.5.0 CRAN (R 4.4.3)
urca 1.3-4 RSPM
urlchecker 1.0.1 RSPM
usethis 3.1.0 RSPM
V8 6.0.3 RSPM
vctrs 0.6.5 RSPM
viridisLite 0.4.2 RSPM
visNetwork 2.1.2 RSPM
weathercan 0.7.3.9000 https://ropensci.r-universe.dev (R 4.4.3)
webshot 0.5.5 RSPM
withr 3.0.2 RSPM
xfun 0.52 RSPM
xml2 1.3.8 CRAN (R 4.4.3)
xtable 1.8-4 RSPM
xts 0.14.1 RSPM
yaml 2.3.10 RSPM
zip 2.3.3 RSPM
zoo 1.8-14 RSPM
Code
reticulate::py_list_packages() |> 
 knitr::kable(caption = "Python packages", format = "html",
      col.names = c("Package", "Version", "Requirement"),
      row.names = FALSE,
      align = c("c", "l", "r")) |> 
  kableExtra::kable_styling(bootstrap_options = c("striped", "hover"), font_size = 12) |>
  kableExtra::scroll_box(width = "100%", height = "375px")  
Python packages
Package Version Requirement
absl-py 2.1.0 absl-py==2.1.0
asttokens 2.4.1 asttokens==2.4.1
astunparse 1.6.3 astunparse==1.6.3
audioconverter 2.0.3 audioconverter==2.0.3
autograd 1.6.2 autograd==1.6.2
autograd-gamma 0.5.0 autograd-gamma==0.5.0
beautifulsoup4 4.12.3 beautifulsoup4==4.12.3
Brotli 1.1.0 Brotli==1.1.0
certifi 2023.11.17 certifi==2023.11.17
cffi 1.16.0 cffi==1.16.0
charset-normalizer 3.3.2 charset-normalizer==3.3.2
clarabel 0.9.0 clarabel==0.9.0
click 8.1.7 click==8.1.7
cloudpickle 3.0.0 cloudpickle==3.0.0
colorama 0.4.6 colorama==0.4.6
comm 0.2.1 comm==0.2.1
contourpy 1.2.0 contourpy==1.2.0
cvxopt 1.3.2 cvxopt==1.3.2
cvxpy 1.5.2 cvxpy==1.5.2
cycler 0.12.1 cycler==0.12.1
debugpy 1.8.0 debugpy==1.8.0
decorator 4.4.2 decorator==4.4.2
delete-chrome-history-py 0.1.8 delete-chrome-history-py==0.1.8
easyocr 1.7.1 easyocr==1.7.1
ecos 2.0.13 ecos==2.0.13
editdistance 0.8.1 editdistance==0.8.1
efficientnet 1.0.0 efficientnet==1.0.0
essential-generators 1.0 essential-generators==1.0
et-xmlfile 1.1.0 et-xmlfile==1.1.0
executing 2.0.1 executing==2.0.1
fancyimpute 0.7.0 fancyimpute==0.7.0
ffmpeg 1.4 ffmpeg==1.4
ffmpeg-python 0.2.0 ffmpeg-python==0.2.0
filedir 0.0.3 filedir==0.0.3
filelock 3.13.1 filelock==3.13.1
flatbuffers 24.3.25 flatbuffers==24.3.25
fonttools 4.47.2 fonttools==4.47.2
formulaic 1.0.1 formulaic==1.0.1
fsspec 2023.12.2 fsspec==2023.12.2
future 0.18.3 future==0.18.3
gast 0.6.0 gast==0.6.0
git-filter-repo 2.45.0 git-filter-repo==2.45.0
google-pasta 0.2.0 google-pasta==0.2.0
graphviz 0.20.3 graphviz==0.20.3
grpcio 1.65.4 grpcio==1.65.4
gTTS 2.5.1 gTTS==2.5.1
h5py 3.11.0 h5py==3.11.0
idna 3.6 idna==3.6
imageio 2.34.2 imageio==2.34.2
imageio-ffmpeg 0.5.1 imageio-ffmpeg==0.5.1
imgaug 0.4.0 imgaug==0.4.0
iniconfig 2.0.0 iniconfig==2.0.0
interface-meta 1.3.0 interface-meta==1.3.0
ipykernel 6.29.5 ipykernel==6.29.5
ipython 8.20.0 ipython==8.20.0
jedi 0.19.1 jedi==0.19.1
Jinja2 3.1.3 Jinja2==3.1.3
joblib 1.4.0 joblib==1.4.0
jupyter_client 8.6.0 jupyter_client==8.6.0
jupyter_core 5.7.1 jupyter_core==5.7.1
keras 3.4.1 keras==3.4.1
Keras-Applications 1.0.8 Keras-Applications==1.0.8
keras-ocr 0.9.3 keras-ocr==0.9.3
kiwisolver 1.4.5 kiwisolver==1.4.5
knnimpute 0.1.0 knnimpute==0.1.0
lazy_loader 0.4 lazy_loader==0.4
libclang 18.1.1 libclang==18.1.1
lifelines 0.28.0 lifelines==0.28.0
llvmlite 0.41.1 llvmlite==0.41.1
Markdown 3.6 Markdown==3.6
markdown-it-py 3.0.0 markdown-it-py==3.0.0
MarkupSafe 2.1.4 MarkupSafe==2.1.4
matplotlib 3.8.2 matplotlib==3.8.2
matplotlib-inline 0.1.6 matplotlib-inline==0.1.6
mdurl 0.1.2 mdurl==0.1.2
mido 1.3.3 mido==1.3.3
ml-dtypes 0.4.0 ml-dtypes==0.4.0
more-itertools 10.2.0 more-itertools==10.2.0
moviepy 1.0.3 moviepy==1.0.3
mpmath 1.3.0 mpmath==1.3.0
multipledispatch 1.0.0 multipledispatch==1.0.0
mutagen 1.47.0 mutagen==1.47.0
namex 0.0.8 namex==0.0.8
natsort 8.4.0 natsort==8.4.0
nest-asyncio 1.5.9 nest-asyncio==1.5.9
networkx 3.2.1 networkx==3.2.1
ninja 1.11.1.1 ninja==1.11.1.1
nose 1.3.7 nose==1.3.7
numba 0.58.1 numba==0.58.1
numexpr 2.10.0 numexpr==2.10.0
numpy 1.26.3 numpy==1.26.3
openai-whisper 20231117 openai-whisper==20231117
opencv-python 4.10.0.84 opencv-python==4.10.0.84
opencv-python-headless 4.10.0.84 opencv-python-headless==4.10.0.84
openpyxl 3.1.4 openpyxl==3.1.4
opt-einsum 3.3.0 opt-einsum==3.3.0
optree 0.12.1 optree==0.12.1
osqp 0.6.5 osqp==0.6.5
packaging 23.2 packaging==23.2
pandas 2.2.0 pandas==2.2.0
pandas-flavor 0.6.0 pandas-flavor==0.6.0
parso 0.8.3 parso==0.8.3
patsy 0.5.6 patsy==0.5.6
pillow 10.2.0 pillow==10.2.0
platformdirs 4.1.0 platformdirs==4.1.0
pluggy 1.5.0 pluggy==1.5.0
polars 1.9.0 polars==1.9.0
proglog 0.1.10 proglog==0.1.10
prompt-toolkit 3.0.43 prompt-toolkit==3.0.43
protobuf 4.25.4 protobuf==4.25.4
psutil 5.9.8 psutil==5.9.8
pure-eval 0.2.2 pure-eval==0.2.2
pyarrow 15.0.0 pyarrow==15.0.0
pyclipper 1.3.0.post5 pyclipper==1.3.0.post5
pycparser 2.22 pycparser==2.22
pycryptodomex 3.20.0 pycryptodomex==3.20.0
pydotplus 2.0.2 pydotplus==2.0.2
pydub 0.24.1 pydub==0.24.1
Pygments 2.17.2 Pygments==2.17.2
pyjanitor 0.26.0 pyjanitor==0.26.0
PyMuPDF 1.24.9 PyMuPDF==1.24.9
PyMuPDFb 1.24.9 PyMuPDFb==1.24.9
pyparsing 3.1.1 pyparsing==3.1.1
PyPDF2 3.0.1 PyPDF2==3.0.1
pyreadr 0.5.0 pyreadr==0.5.0
pytesseract 0.3.10 pytesseract==0.3.10
pytest 8.3.1 pytest==8.3.1
python-bidi 0.6.0 python-bidi==0.6.0
python-dateutil 2.8.2 python-dateutil==2.8.2
pytube 15.0.0 pytube==15.0.0
pytube3 9.6.4 pytube3==9.6.4
pytz 2023.3.post1 pytz==2023.3.post1
pywin32 306 pywin32==306
PyYAML 6.0.1 PyYAML==6.0.1
pyzmq 25.1.2 pyzmq==25.1.2
qdldl 0.1.7.post1 qdldl==0.1.7.post1
regex 2023.12.25 regex==2023.12.25
requests 2.32.3 requests==2.32.3
rich 13.7.1 rich==13.7.1
rpy2 3.5.16 rpy2==3.5.16
scikit-image 0.24.0 scikit-image==0.24.0
scikit-learn 1.3.2 scikit-learn==1.3.2
scikit-survival 0.22.2 scikit-survival==0.22.2
scipy 1.11.4 scipy==1.11.4
scs 3.2.6 scs==3.2.6
seaborn 0.13.2 seaborn==0.13.2
semantic-version 2.10.0 semantic-version==2.10.0
setuptools-rust 1.8.1 setuptools-rust==1.8.1
shapely 2.0.5 shapely==2.0.5
six 1.16.0 six==1.16.0
soupsieve 2.5 soupsieve==2.5
SpeechRecognition 3.10.1 SpeechRecognition==3.10.1
spyder-kernels 2.5.2 spyder-kernels==2.5.2
stack-data 0.6.3 stack-data==0.6.3
statsmodels 0.14.1 statsmodels==0.14.1
sympy 1.12 sympy==1.12
target 0.0.11 target==0.0.11
tensorboard 2.17.0 tensorboard==2.17.0
tensorboard-data-server 0.7.2 tensorboard-data-server==0.7.2
tensorflow 2.17.0 tensorflow==2.17.0
tensorflow-intel 2.17.0 tensorflow-intel==2.17.0
tensorflow-io-gcs-filesystem 0.31.0 tensorflow-io-gcs-filesystem==0.31.0
termcolor 2.4.0 termcolor==2.4.0
threadpoolctl 3.4.0 threadpoolctl==3.4.0
tifffile 2024.7.24 tifffile==2024.7.24
tiktoken 0.5.2 tiktoken==0.5.2
torch 2.4.0 torch==2.4.0
torchaudio 2.4.0 torchaudio==2.4.0
torchvision 0.19.0 torchvision==0.19.0
tornado 6.4 tornado==6.4
tqdm 4.66.1 tqdm==4.66.1
traitlets 5.14.1 traitlets==5.14.1
translator 0.0.9 translator==0.0.9
typing_extensions 4.9.0 typing_extensions==4.9.0
tzdata 2023.4 tzdata==2023.4
tzlocal 5.2 tzlocal==5.2
urllib3 2.1.0 urllib3==2.1.0
validators 0.33.0 validators==0.33.0
watchdog 3.0.0 watchdog==3.0.0
wcwidth 0.2.13 wcwidth==0.2.13
websockets 12.0 websockets==12.0
Werkzeug 3.0.3 Werkzeug==3.0.3
whisper 1.1.10 whisper==1.1.10
wrapt 1.16.0 wrapt==1.16.0
xarray 2024.1.1 xarray==2024.1.1
youtube-dl 2021.12.17 youtube-dl==2021.12.17
yt-dlp 2024.7.9 yt-dlp==2024.7.9

References

1.
LaZerte SE, Albers S. weathercan: Download and format weather data from Environment and Climate Change Canada. The Journal of Open Source Software. 2018;3(22):571. https://joss.theoj.org/papers/10.21105/joss.00571.